random sample
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.04)
- Europe > United Kingdom > England > Bristol (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Education (1.00)
- Health & Medicine > Consumer Health (0.92)
- Law (0.67)
Supplementary Material: Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
The initial learning rate is set to 0.1.

A.2 Adversarial Training

Unless otherwise specified, we perform adversarial training to train robust classifiers by following Madry et al. [74]. Specifically, we train against a projected gradient descent (PGD) adversary, starting from a random initial perturbation of the training data. Unless otherwise specified, we use the values of ε provided in Table 5 to train our models. We use 7 steps of PGD with a step size of ε/5.

A.3 Delusive Adversaries

Six delusive attacks are considered to validate our proposed defense.
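The PGD recipe described above (random initial perturbation, 7 steps, step size ε/5, projection back into the ε-ball) can be sketched on a toy model. The logistic-regression setup, ε = 0.1, and the synthetic data below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def pgd_attack(w, b, x, y, eps, steps=7):
    """L-inf PGD on a logistic-regression loss: random start,
    7 ascent steps of size eps/5, projected into the eps-ball."""
    step = eps / 5
    delta = rng.uniform(-eps, eps, size=x.shape)   # random initial perturbation
    for _ in range(steps):
        z = (x + delta) @ w + b
        p = 1.0 / (1.0 + np.exp(-z))               # sigmoid
        grad_x = (p - y)[:, None] * w[None, :]     # d(loss)/d(input)
        delta += step * np.sign(grad_x)            # gradient-ascent step
        delta = np.clip(delta, -eps, eps)          # project into the eps-ball
    return x + delta

# toy adversarial-training loop: fit the model on perturbed examples
w, b = np.zeros(2), 0.0
x = rng.normal(size=(64, 2))
y = (x[:, 0] + x[:, 1] > 0).astype(float)
for _ in range(200):
    x_adv = pgd_attack(w, b, x, y, eps=0.1)
    z = x_adv @ w + b
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.1 * ((p - y)[:, None] * x_adv).mean(axis=0)
    b -= 0.1 * (p - y).mean()
```

The inner `pgd_attack` maximizes the loss while the outer loop minimizes it on the resulting adversarial examples, which is the min-max structure of Madry-style adversarial training.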
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- North America > United States > California > Santa Barbara County > Santa Barbara (0.04)
- North America > Dominican Republic (0.04)
- Europe > Poland (0.04)
- (4 more...)
Learning to See by Looking at Noise - Supplementary Material
Dead leaves - Textures

We train with stochastic gradient descent with momentum (set to 0.9) for 200 epochs, starting with a learning rate of 0.36 and decaying it by a factor of 0.1 at epochs 155, 170, and 185. The dimensionality of the last and the penultimate embeddings are 128 and 4096, respectively. From left to right the columns correspond to the tasks: EuroSAT, Resisc45, Diabetic Retinopathy, and Patch Camelyon. Here, we present additional data for these experiments, and provide the full distributions for these criteria and all datasets.
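The step schedule above (base rate 0.36, decay by a factor of 0.1 at epochs 155, 170, and 185) can be written as a small helper; the function name and defaults are just for illustration:

```python
def learning_rate(epoch, base_lr=0.36, milestones=(155, 170, 185), gamma=0.1):
    """Step decay: multiply the base rate by gamma at each milestone epoch
    reached so far, over a 200-epoch run."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

In PyTorch this is the behavior of `torch.optim.lr_scheduler.MultiStepLR` with `milestones=[155, 170, 185]` and `gamma=0.1`, paired with `SGD(..., momentum=0.9)`.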
Estimating the Size of a Large Network and its Communities from a Random Sample
Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios.
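The observation model in the abstract — sample a vertex subset W uniformly at random and observe the induced subgraph G(W) together with each sampled vertex's total degree and block membership — can be simulated directly. This sketch reproduces only the sampling setup, not the PULSE estimator itself; the parameter values are arbitrary:

```python
import random

def sample_sbm_observation(sizes, P, sample_frac, seed=0):
    """Generate an SBM graph with block sizes `sizes` and edge-probability
    matrix P, sample a fraction of the vertices, and return what the
    estimator observes: the induced subgraph G(W), plus each sampled
    vertex's total degree and block membership."""
    rng = random.Random(seed)
    n = sum(sizes)
    block = [k for k, s in enumerate(sizes) for _ in range(s)]
    adj = [set() for _ in range(n)]
    for i in range(n):                      # draw each edge independently
        for j in range(i + 1, n):
            if rng.random() < P[block[i]][block[j]]:
                adj[i].add(j)
                adj[j].add(i)
    W = rng.sample(range(n), int(sample_frac * n))
    Wset = set(W)
    induced = {v: adj[v] & Wset for v in W}       # G(W): edges inside W only
    total_degree = {v: len(adj[v]) for v in W}    # degrees in the full graph
    membership = {v: block[v] for v in W}         # observed block labels
    return induced, total_degree, membership
```

The gap between each vertex's total degree and its induced degree in G(W) is exactly the partial information a size estimator can exploit.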
Fine-Tuning Large Language Models with QLoRA for Offensive Language Detection in Roman Urdu-English Code-Mixed Text
Hussain, Nisar, Qasim, Amna, Mehak, Gull, Zain, Muhammad, Hafeez, Momina, Sidorov, Grigori
The use of derogatory terms in languages that employ code mixing, such as Roman Urdu, presents challenges for Natural Language Processing systems due to unstated grammar, inconsistent spelling, and a scarcity of labeled data. In this work, we propose a QLoRA-based fine-tuning framework to improve offensive language detection in Roman Urdu-English text. We translated the Roman Urdu-English code-mixed dataset into English using Google Translate to leverage English LLMs, while acknowledging that this translation reduces direct engagement with code-mixing features. Our focus is on classification performance using English-translated low-resource inputs. We fine-tuned several transformers and large language models, including Meta LLaMA 3 8B, Mistral 7B v0.1, LLaMA 2 7B, ModernBERT, and RoBERTa, with QLoRA for memory-efficient adaptation. Models were trained and evaluated on a manually annotated Roman Urdu dataset for offensive vs. non-offensive content. Of all tested models, the highest F1 score of 91.45 was attained by Meta LLaMA 3 8B, followed by Mistral 7B at 89.66, surpassing traditional transformer baselines. These results demonstrate the efficacy of QLoRA in fine-tuning high-performing models for low-resource settings such as code-mixed offensive language detection, and confirm the potential of LLMs for this task. This work advances a scalable approach to Roman Urdu moderation and paves the way for future multilingual offensive detection systems based on LLMs.
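The trainable component in (Q)LoRA is a pair of low-rank matrices added to each frozen (in QLoRA, 4-bit quantized) weight. A minimal numpy sketch of that arithmetic, with assumed sizes and quantization omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 8, 2, 16              # hidden size, LoRA rank, scaling (assumed values)
W0 = rng.normal(size=(d, d))        # frozen base weight (4-bit quantized in QLoRA)
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection, zero-initialized

def lora_forward(x):
    """Base output plus the scaled low-rank update:
    y = x W0^T + (alpha / r) * x A^T B^T."""
    return x @ W0.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(4, d))
# with B initialized to zero the adapter is a no-op,
# so training starts exactly from the frozen base model
assert np.allclose(lora_forward(x), x @ W0.T)
```

Only A and B (2·r·d parameters per layer instead of d²) receive gradients, which is what makes fine-tuning 7B-8B models memory-feasible in the setting described above.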
- Europe > Switzerland (0.04)
- Asia > Pakistan (0.04)
- North America > Mexico (0.04)
- Asia > India (0.04)